UniMax: Fairer and more Effective Language Sampling for Large-Scale Multilingual Pretraining
Pretrained multilingual large language models have typically used heuristic
temperature-based sampling to balance between different languages. However,
previous work has not systematically evaluated the efficacy of different
pretraining language distributions across model scales. In this paper, we
propose a new sampling method, UniMax, that delivers more uniform coverage of
head languages while mitigating overfitting on tail languages by explicitly
capping the number of repeats over each language's corpus. We perform an
extensive series of ablations testing a range of sampling strategies on a suite
of multilingual benchmarks, while varying model scale. We find that UniMax
outperforms standard temperature-based sampling, and the benefits persist as
scale increases. As part of our contribution, we release: (i) an improved and
refreshed mC4 multilingual corpus consisting of 29 trillion characters across
107 languages, and (ii) a suite of pretrained umT5 model checkpoints trained
with UniMax sampling.
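The core idea, uniform allocation subject to a per-language repeat cap, can be sketched in a few lines of Python. The snippet below is an illustrative simplification, not the paper's exact algorithm; the corpus sizes, budget, and default cap of four epochs are assumptions made for the example.

```python
# A simplified sketch of UniMax-style budget allocation (illustrative only;
# the paper's own algorithm should be treated as authoritative).
# `corpus_chars` maps each language to its corpus size in characters,
# `budget` is the total pretraining character budget, and `max_epochs`
# caps how many times any language's corpus may be repeated.
def unimax_allocation(corpus_chars: dict[str, float],
                      budget: float,
                      max_epochs: float = 4.0) -> dict[str, float]:
    alloc = {}
    remaining_budget = budget
    # Visit languages from smallest corpus to largest, so tail languages
    # hit their repeat cap first and the surplus flows to head languages.
    remaining = sorted(corpus_chars, key=corpus_chars.get)
    while remaining:
        lang = remaining.pop(0)
        uniform_share = remaining_budget / (len(remaining) + 1)
        # Never allocate more than `max_epochs` passes over this corpus.
        alloc[lang] = min(uniform_share, max_epochs * corpus_chars[lang])
        remaining_budget -= alloc[lang]
    # Convert character allocations into sampling probabilities.
    total = sum(alloc.values())
    return {lang: chars / total for lang, chars in alloc.items()}

# Example: a tiny head/tail mix with a budget of one million characters.
probs = unimax_allocation({"en": 5e6, "sw": 2e4, "gd": 5e3}, budget=1e6)
print(probs)  # tail languages capped, surplus redistributed to "en"
```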
Understanding HTML with Large Language Models
Large language models (LLMs) have shown exceptional performance on a variety
of natural language tasks. Yet, their capabilities for HTML understanding --
i.e., parsing the raw HTML of a webpage, with applications to automation of
web-based tasks, crawling, and browser-assisted retrieval -- have not been
fully explored. We contribute HTML understanding models (fine-tuned LLMs) and
an in-depth analysis of their capabilities under three tasks: (i) Semantic
Classification of HTML elements, (ii) Description Generation for HTML inputs,
and (iii) Autonomous Web Navigation of HTML pages. While previous work has
developed dedicated architectures and training procedures for HTML
understanding, we show that LLMs pretrained on standard natural language
corpora transfer remarkably well to HTML understanding tasks. For instance,
fine-tuned LLMs are 12% more accurate at semantic classification compared to
models trained exclusively on the task dataset. Moreover, when fine-tuned on
data from the MiniWoB benchmark, LLMs successfully complete 50% more tasks
using 192x less data compared to the previous best supervised model. Out of the
LLMs we evaluate, we show evidence that T5-based models are ideal due to their
bidirectional encoder-decoder architecture. To promote further research on LLMs
for HTML understanding, we create and open-source a large-scale HTML dataset
distilled and auto-labeled from CommonCrawl.
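To make the task framing concrete, the sketch below shows one hypothetical way to cast semantic classification of an HTML element as a text-to-text problem for a T5-style model via Hugging Face Transformers. The HTML snippet, the target-element marker, the task prefix, and the label space are illustrative assumptions rather than the paper's actual preprocessing, and an off-the-shelf checkpoint without fine-tuning will not produce meaningful labels; the example only demonstrates the input/output framing.

```python
# Hypothetical text-to-text framing of HTML element classification.
# Not the paper's pipeline; intended only to show how raw HTML plus a
# task prefix can be fed to a T5-style encoder-decoder.
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

# Raw HTML with the element of interest wrapped in a (made-up) marker tag;
# after fine-tuning, the model would emit a semantic role such as
# "password field" or "submit button".
html_snippet = (
    '<form><label for="user">Email</label>'
    '<input id="user" type="text">'
    '<target><input type="password"></target>'
    "<button>Sign in</button></form>"
)

inputs = tokenizer("classify element: " + html_snippet,
                   return_tensors="pt", truncation=True)
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```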
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
Transfer learning, where a model is first pre-trained on a data-rich task
before being fine-tuned on a downstream task, has emerged as a powerful
technique in natural language processing (NLP). The effectiveness of transfer
learning has given rise to a diversity of approaches, methodology, and
practice. In this paper, we explore the landscape of transfer learning
techniques for NLP by introducing a unified framework that converts all
text-based language problems into a text-to-text format. Our systematic study
compares pre-training objectives, architectures, unlabeled data sets, transfer
approaches, and other factors on dozens of language understanding tasks. By
combining the insights from our exploration with scale and our new "Colossal
Clean Crawled Corpus", we achieve state-of-the-art results on many benchmarks
covering summarization, question answering, text classification, and more. To
facilitate future work on transfer learning for NLP, we release our data set,
pre-trained models, and code. (Final version as published in JMLR.)
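As a small illustration of the text-to-text interface, the sketch below runs the publicly released T5 checkpoints through Hugging Face Transformers using the task prefixes described in the paper; the original experiments used the authors' own codebase, so this is just one convenient way to exercise the same framing.

```python
# Minimal sketch of T5's text-to-text interface using the released
# checkpoints via Hugging Face Transformers.
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as "prefix: input text" -> "output text".
examples = [
    "translate English to German: The house is wonderful.",
    "summarize: Transfer learning, where a model is first pre-trained on a "
    "data-rich task before being fine-tuned on a downstream task, has emerged "
    "as a powerful technique in natural language processing.",
]
for text in examples:
    input_ids = tokenizer(text, return_tensors="pt").input_ids
    output_ids = model.generate(input_ids, max_new_tokens=40)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```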